UniFlow: A CFG-Based Framework for Pluggable Type Checking and Type Inference
A type system is a crucial component of a high-level programming language, as it enhances program correctness by ruling out certain classes of type errors. However, the built-in type system is fixed to the rules defined by the language's specification (e.g., in Java, Kotlin, and C++). Pluggable type systems were introduced to provide customizable type rules for different scenarios.
Various approaches exist for implementing a pluggable type system. The Checker Framework is a well-known framework to facilitate the development of type checkers for Java. This framework enables developers to define their type rules and override the analysis logic. Additionally, Checker Framework Inference is a framework built upon the Checker Framework to provide constraint-based whole-program inference. It helps to reduce the burden of manually annotating the codebase when applying a new type system.
However, the complexity of these frameworks presents a steep learning curve to type system developers. This work examines critical issues encountered in our previous experience developing with these frameworks. The Checker Framework performs its analysis on two different program representations: the abstract syntax tree (AST) and the control flow graph (CFG). The shared responsibilities of these representations within the framework cause readability and maintainability issues for developers. Checker Framework Inference suffers not only from the same problem but also from the difficulty of employing the same type rules for both type checking and type inference: the underlying Checker Framework assumes type rules can be checked modularly and immediately at any AST node, whereas type inference is not modular and generates constraints to be solved at a later stage.
We propose UniFlow, a novel CFG-based type system framework that addresses the aforementioned issues by providing a unified development process for type systems supporting both type checking and type inference. It strives to resolve types and apply type rules on the program's CFGs whenever possible. This approach reduces friction in type system development, allowing developers to focus on a single flow-sensitive program representation that is simpler than ASTs. It also requires developers to express type rules as constraints, so that the same set of type rules is implemented once and consistently reused in both type checking and type inference. Moreover, our framework supports running multiple type systems and aims to improve error message reporting for users.
We present UniFlow's architecture and explain each crucial component and functionality in detail. We discuss the advantages and limitations of our framework. Furthermore, we explore the initial implementation of the framework and outline future research directions.
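The constraint-based reuse of type rules described in this abstract can be sketched in miniature. The sketch below uses a hypothetical two-qualifier nullness lattice and a fixed-point solver over flow constraints; all names and the constraint encoding are illustrative assumptions, not UniFlow's actual API.

```python
# Illustrative sketch: type rules expressed once as flow constraints, then
# solved by fixed-point iteration (inference); checking can reuse the same
# constraints by verifying each one against declared annotations.

NONNULL, NULLABLE = "NonNull", "Nullable"   # NonNull <: Nullable

def lub(a, b):
    """Least upper bound in the two-element nullness lattice."""
    return NULLABLE if NULLABLE in (a, b) else NONNULL

def solve(constraints, n_vars):
    """Least solution of flow constraints by fixed-point iteration.

    Each constraint (src, tgt) means src flows into variable tgt, where
    src is either a constant qualifier or another variable's index.
    """
    sol = [NONNULL] * n_vars            # start every variable at bottom
    changed = True
    while changed:
        changed = False
        for src, tgt in constraints:
            val = sol[src] if isinstance(src, int) else src
            new = lub(sol[tgt], val)
            if new != sol[tgt]:
                sol[tgt], changed = new, True
    return sol

# x = null; y = x; z = "literal"  -->  three inference variables
constraints = [(NULLABLE, 0), (0, 1), (NONNULL, 2)]
print(solve(constraints, 3))   # ['Nullable', 'Nullable', 'NonNull']
```

Because the rules live in the constraints rather than in per-AST checks, the same list can drive both an immediate check and a deferred whole-program solve, which is the separation the abstract argues for.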
RayMVSNet++: Learning Ray-based 1D Implicit Fields for Accurate Multi-View Stereo
Learning-based multi-view stereo (MVS) has so far centered around 3D
convolution on cost volumes. Due to the high computation and memory consumption
of 3D CNNs, the resolution of the output depth is often considerably limited.
Different from most existing works dedicated to adaptive refinement of cost
volumes, we opt to directly optimize the depth value along each camera ray,
mimicking the range finding of a laser scanner. This reduces the MVS problem to
ray-based depth optimization which is much more light-weight than full cost
volume optimization. In particular, we propose RayMVSNet which learns
sequential prediction of a 1D implicit field along each camera ray with the
zero-crossing point indicating scene depth. This sequential modeling, conducted
based on transformer features, essentially learns the epipolar line search in
traditional multi-view stereo. We devise a multi-task learning scheme for
better optimization convergence and depth accuracy. We find that the
monotonicity property of the SDFs along each ray greatly benefits depth
estimation. Our method ranks first among learning-based methods on both the
DTU and Tanks & Temples datasets, achieving an overall reconstruction score of 0.33mm on
DTU and an F-score of 59.48% on Tanks & Temples. It is able to produce
high-quality depth estimation and point cloud reconstruction in challenging
scenarios such as objects/scenes with non-textured surface, severe occlusion,
and highly varying depth range. Further, we propose RayMVSNet++ to enhance
contextual feature aggregation for each ray by designing an attentional
gating unit to select semantically relevant neighboring rays within the local
frustum around that ray. RayMVSNet++ achieves state-of-the-art performance on
the ScanNet dataset. In particular, it attains an AbsRel of 0.058m and produces
accurate results on the two subsets of textureless regions and large depth
variation.

Comment: IEEE Transactions on Pattern Analysis and Machine Intelligence. arXiv admin note: substantial text overlap with arXiv:2204.0132
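The zero-crossing idea at the heart of RayMVSNet can be illustrated with a small sketch (not the authors' code): given signed-distance values predicted at sample points along one camera ray, the depth is recovered by locating the sign change, which the monotonicity property noted in the abstract makes unique. The sampling and interpolation here are simplifying assumptions.

```python
# Illustrative sketch: recover scene depth as the zero-crossing of a
# monotonically decreasing 1D SDF sampled along a camera ray.

def zero_crossing_depth(depths, sdf):
    """Linearly interpolate the depth at which the per-ray SDF crosses zero.

    depths: sample depths along the ray (increasing).
    sdf:    predicted signed distances at those samples (decreasing).
    """
    for i in range(len(sdf) - 1):
        if sdf[i] >= 0 > sdf[i + 1]:            # sign change between samples
            t = sdf[i] / (sdf[i] - sdf[i + 1])  # fraction toward next sample
            return depths[i] + t * (depths[i + 1] - depths[i])
    return None  # no surface crossed within the sampled range

# SDF samples at depths 1.0..1.4 along one ray; crossing lies near 1.175
print(zero_crossing_depth([1.0, 1.1, 1.2, 1.3, 1.4],
                          [0.30, 0.15, -0.05, -0.20, -0.35]))
```

This is why the ray-based formulation is lightweight: each ray reduces to a 1D search rather than a dense cost-volume optimization.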
A potential explanation for the effect of carbon source on the characteristics of acetate-fed and glucose-fed aerobic granules
This paper proposes a new theory to account for the effect of carbon source on the characteristics of acetate-fed and glucose-fed aerobic granules. It is well known that reactor pH varies in response to the oxidation of glucose or sodium acetate; the effects attributed to the carbon sources may therefore be explained by the changed pH. This proposal was explored experimentally. Aerobic granules were cultivated in three identical sequencing batch reactors (SBRs: R1, R2 and R3), fed with sodium acetate, glucose and glucose, with pH maintained at 4.5 - 5.5 (the pH range produced by the oxidation of glucose), 4.5 - 5.5 and 7.5 - 8.5 (the range produced by the oxidation of sodium acetate), respectively, and the effects of carbon source and reactor pH on the characteristics of the granules were assessed. The results showed that the characteristics of the aerobic granules, including microbial structure, mixed liquor suspended solids (MLSS), sludge volume index (SVI) and nitrification-denitrification, were strongly affected by reactor pH but independent of the carbon source supplied. These results fully support the validity of the new theory, which suggests that the cultivation of aerobic granules with glucose or sodium acetate should pay more attention to reactor pH than to the carbon source itself. The implications of the theory are discussed with regard to other common carbon sources and to a better understanding of the mechanisms of aerobic granulation.

Keywords: acetate-fed granules, glucose-fed granules, reactor pH, carbon source, characteristics

African Journal of Biotechnology Vol. 9(33), pp. 5357-5365, 16 August, 201
GTC: Guided Training of CTC Towards Efficient and Accurate Scene Text Recognition
Connectionist Temporal Classification (CTC) and attention mechanism are two
main approaches used in recent scene text recognition works. Compared with
attention-based methods, the CTC decoder has a much shorter inference time but
lower accuracy. To design an efficient and effective model, we propose the
guided training of CTC (GTC), in which the CTC model learns better alignments
and feature representations from a more powerful attentional guide. With the
benefit of guided training, the CTC model achieves robust and accurate prediction
for both regular and irregular scene text while maintaining a fast inference
speed. Moreover, to further leverage the potential of CTC decoder, a graph
convolutional network (GCN) is proposed to learn the local correlations of
extracted features. Extensive experiments on standard benchmarks demonstrate
that our end-to-end model achieves a new state-of-the-art for regular and
irregular scene text recognition and requires 6 times less inference time than
attention-based methods.

Comment: Accepted by AAAI 202
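The guided-training idea can be sketched generically: the CTC branch is trained with its own loss plus a distillation term that pulls its per-frame distributions toward those of the stronger attention branch. The loss shape, the frame-wise KL term, and the `guide_weight` parameter are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch of guided training: total loss combines the student's
# CTC loss with a frame-wise KL divergence toward the attention teacher.

import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete probability distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def guided_loss(ctc_loss, ctc_frames, attn_frames, guide_weight=0.1):
    """Total loss = CTC loss + weighted mean frame-wise KL to the guide."""
    guide = sum(kl_divergence(a, c) for a, c in zip(attn_frames, ctc_frames))
    return ctc_loss + guide_weight * guide / len(ctc_frames)

attn = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]   # teacher (attention) frame probs
ctc  = [[0.6, 0.3, 0.1], [0.2, 0.7, 0.1]]   # student (CTC) frame probs
print(guided_loss(1.25, ctc, attn))
```

At inference, only the fast CTC branch is kept, which is how this family of methods retains CTC's speed while closing the accuracy gap.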
Delving into Crispness: Guided Label Refinement for Crisp Edge Detection
Learning-based edge detection usually suffers from predicting thick edges.
Through extensive quantitative study with a new edge crispness measure, we find
that noisy human-labeled edges are the main cause of thick predictions. Based
on this observation, we advocate that more attention should be paid to label
quality than to model design to achieve crisp edge detection. To this end, we
propose an effective Canny-guided refinement of human-labeled edges whose
result can be used to train crisp edge detectors. Essentially, it seeks a
subset of over-detected Canny edges that best aligns with the human labels. We show that
several existing edge detectors can be turned into a crisp edge detector
through training on our refined edge maps. Experiments demonstrate that deep
models trained with refined edges achieve a significant boost in crispness,
from 17.4% to 30.6%. With the PiDiNet backbone, our method improves
ODS and OIS by 12.2% and 12.6% on the Multicue dataset, respectively, without
relying on non-maximal suppression. We further conduct experiments and show the
superiority of our crisp edge detection for optical flow estimation and image
segmentation.

Comment: Accepted by TI
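The Canny-guided refinement can be illustrated on pixel sets: keep each thin, over-detected Canny edge pixel only if it lies close to some (possibly thick, noisy) human-labeled edge pixel. The Chebyshev distance tolerance and the set-of-coordinates representation are simplifying assumptions for this sketch.

```python
# Illustrative sketch: refine noisy human edge labels by selecting the subset
# of over-detected Canny edge pixels that lie near a human-labeled pixel.

def refine_labels(canny_edges, human_edges, tol=1):
    """Subset of Canny pixels within Chebyshev distance `tol` of a label."""
    def near(p):
        return any(max(abs(p[0] - q[0]), abs(p[1] - q[1])) <= tol
                   for q in human_edges)
    return {p for p in canny_edges if near(p)}

canny = {(0, 0), (0, 1), (0, 2), (5, 5)}    # thin but over-detected edges
human = {(1, 1)}                             # center of a thick human label
print(sorted(refine_labels(canny, human)))   # [(0, 0), (0, 1), (0, 2)]
```

The refined set inherits the one-pixel thinness of Canny output while discarding Canny responses with no human support, which is the property that makes it usable as crisp training labels.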
Revisiting Initializing Then Refining: An Incomplete and Missing Graph Imputation Network
With the development of various applications, such as social networks and
knowledge graphs, graph data has been ubiquitous in the real world.
Unfortunately, graph data often suffers from absence due to privacy-protection
policies or copyright restrictions during data collection. The absence of
graph data can be roughly categorized into attribute-incomplete and
attribute-missing circumstances: attribute-incomplete means that part of the
attribute vector of every node is unobserved, while attribute-missing means
that the entire attribute vectors of some nodes are absent. Although many
efforts have been devoted to each problem, none is custom-designed for the
common situation where both types of absence occur simultaneously. To fill
this gap, we develop a novel network termed
Revisiting Initializing Then Refining (RITR), where we complete both
attribute-incomplete and attribute-missing samples under the guidance of a
novel initializing-then-refining imputation criterion. Specifically, to
complete attribute-incomplete samples, we first initialize the incomplete
attributes using Gaussian noise before network learning, and then introduce a
structure-attribute consistency constraint to refine incomplete values by
approximating a structure-attribute correlation matrix to a high-order
structural matrix. To complete attribute-missing samples, we first adopt
structure embeddings of attribute-missing samples as the embedding
initialization, and then refine these initial values by adaptively aggregating
the reliable information of attribute-incomplete samples according to a dynamic
affinity structure. To the best of our knowledge, this newly designed method is
the first unsupervised framework dedicated to handling hybrid-absent graphs.
Extensive experiments on four datasets verify that our method consistently
outperforms existing state-of-the-art competitors.
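The initializing stage described in this abstract can be sketched on a toy graph: per-entry gaps (attribute-incomplete nodes) are filled with Gaussian noise, while whole-row gaps (attribute-missing nodes) are initialized from their neighbors. The noise scale and the neighbor-mean aggregation (standing in for the structure embeddings the paper uses) are assumptions for illustration.

```python
# Illustrative sketch of an initializing-then-refining first stage:
# noise-fill incomplete entries, then neighbor-fill fully missing rows.

import random

def initialize(features, adjacency, noise_std=0.01, seed=0):
    """Fill per-entry gaps with Gaussian noise, then whole-row gaps with the
    mean of already-initialized neighbor rows.

    features:  list of rows; None = attribute-missing node, and within a row
               None = an unobserved entry of an attribute-incomplete node.
    adjacency: dict mapping node index to a list of neighbor indices.
    """
    rng = random.Random(seed)
    filled = [None if row is None else
              [x if x is not None else rng.gauss(0.0, noise_std) for x in row]
              for row in features]
    for i, row in enumerate(filled):
        if row is None:                      # attribute-missing node
            nbrs = [filled[j] for j in adjacency[i] if filled[j] is not None]
            dim = len(next(r for r in filled if r is not None))
            if nbrs:
                filled[i] = [sum(r[k] for r in nbrs) / len(nbrs)
                             for k in range(dim)]
            else:
                filled[i] = [0.0] * dim
    return filled

# node 0: complete, node 1: incomplete (one entry), node 2: missing entirely
feats = [[1.0, 2.0], [3.0, None], None]
adj = {0: [1], 1: [0, 2], 2: [0, 1]}
init = initialize(feats, adj)
print(init[2])   # mean of the initialized rows 0 and 1
```

The subsequent refining stage (the structure-attribute consistency constraint and the dynamic affinity aggregation) would then iteratively improve these crude initial values, which this sketch does not attempt.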